Artificial intelligence (AI)-synthesized faces—so-called deepfake images—have increasingly been used for malicious purposes and have caused markedly adverse impacts. Because online users must contend with discerning fake from real, great emphasis has been placed on enhancing human detection of deepfake images. We conducted an online human-subject study (N = 237) investigating the effect of three training strategies (explicit training with visible artifacts in synthetic faces, implicit training through experiencing the generation of synthetic faces from real human faces, and a combination of both artifact and generation training) on participants' detection of synthetic faces generated by state-of-the-art StyleGAN techniques. Comparing participants' deepfake detection across three phases (baseline in phase 1 without any training, phase 2 after one training session, and phase 3 after the other training session), we found that all training strategies effectively enhanced participants' detection of AI-synthesized faces and their decision confidence. We also explored factors that affect participants' learning and decision-making in deepfake detection. Responses to the open-ended question revealed that participants developed generalized strategies and utilized artifacts beyond those covered in training. Our quantitative and qualitative results provide nuanced insights into the promises and limitations of the training strategies. In addition to advancing theoretical understanding of human training in the context of deepfake image detection, our findings hold practical implications for interface design.
-
Recently, deepfake techniques have been adopted by real-world adversaries to fabricate believable personas (posing as experts or insiders) in disinformation campaigns that promote false narratives and deceive the public. In this paper, we investigate how fake personas influence user perception of the disinformation shared by such accounts. Using Twitter as an exemplary platform, we conduct a user study (N = 417) in which participants read tweets of fake news with (and without) the presence of the tweet authors' profiles. Our study examines and compares three types of fake profiles: deepfake profiles, profiles of relevant organizations, and simple bot profiles. Our results highlight the significant impact of deepfake and organization profiles in increasing the perceived accuracy of, and engagement with, fake news. Moreover, deepfake profiles are rated as significantly more real than the other profile types. Finally, we observe that users may like/reply/share a tweet even when they believe it is inaccurate (e.g., for fun or truth-seeking), which can further disseminate false information. We then discuss the implications of our findings and directions for future research.